-
Abstract: Automated manufacturing feature recognition is a crucial link between computer-aided design and manufacturing, facilitating process selection and other downstream tasks in computer-aided process planning. While various methods, including graph-based, rule-based, and neural network approaches, have been proposed for automatic feature recognition, they suffer from poor scalability or computational inefficiency. Recently, voxel-based convolutional neural networks have shown promise in addressing these challenges but incur a tradeoff between computational cost and feature resolution. This paper investigates a computationally efficient sparse voxel-based convolutional neural network for manufacturing feature recognition, specifically an octree-based sparse voxel convolutional neural network. The model is trained on a large-scale manufacturing feature dataset, and its performance is compared to a voxel-based feature recognition model (FeatureNet). The results indicate that the octree-based model yields higher feature recognition accuracy (99.5% on the test dataset) with 44% lower graphics processing unit (GPU) memory consumption than a voxel-based model of comparable resolution. In addition, increasing the resolution of the octree-based model enables recognition of finer manufacturing features. These results indicate that a sparse voxel-based convolutional neural network is a computationally efficient deep learning model for manufacturing feature recognition that can enable process planning automation. Moreover, the sparse voxel-based neural network demonstrated performance comparable to a boundary representation-based feature recognition neural network, achieving similar accuracy in single-feature recognition without access to exact 3D shape descriptors.
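As a rough illustration of the memory savings behind the octree representation described above, here is a minimal Python sketch (a toy under stated assumptions, not the paper's implementation; all function names and parameters are hypothetical) that subdivides a cubic occupancy grid only where geometry is present, so large empty or fully solid regions collapse to single nodes instead of dense voxel blocks.

```python
# Minimal sketch: adaptive octree over a binary occupancy grid.
# Hypothetical illustration only; not the paper's network or data pipeline.
import numpy as np

def build_octree(occ, x0=0, y0=0, z0=0, size=None, depth=0, max_depth=6):
    """Subdivide a cubic region only while it mixes filled and empty voxels."""
    if size is None:
        size = occ.shape[0]  # assumes a cubic grid with power-of-two side length
    block = occ[x0:x0 + size, y0:y0 + size, z0:z0 + size]
    if depth == max_depth or block.all() or not block.any():
        # Leaf: a uniform (or max-resolution) region stored as a single cell.
        return {"leaf": True, "filled": bool(block.any()), "size": size}
    half = size // 2
    children = [
        build_octree(occ, x0 + dx * half, y0 + dy * half, z0 + dz * half,
                     half, depth + 1, max_depth)
        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
    ]
    return {"leaf": False, "children": children, "size": size}

def count_nodes(node):
    """Total number of octree nodes, a rough proxy for memory footprint."""
    if node["leaf"]:
        return 1
    return 1 + sum(count_nodes(c) for c in node["children"])

if __name__ == "__main__":
    grid = np.zeros((64, 64, 64), dtype=bool)
    grid[20:30, 20:30, 20:30] = True            # one small feature in a mostly empty volume
    print("dense voxels:", grid.size)           # 262,144 cells regardless of content
    print("octree nodes:", count_nodes(build_octree(grid)))  # far fewer for sparse geometry
```

In an octree-based sparse voxel CNN, convolutions are evaluated only on occupied nodes rather than on every cell of a dense grid, which is, roughly speaking, where the reported GPU memory reduction comes from.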
-
Given a part design, the task of manufacturing process selection chooses an appropriate manufacturing process to fabricate it. Prior research has traditionally determined manufacturing processes through direct classification. However, an alternative approach to selecting a manufacturing process for a new design is to identify previously produced parts with comparable shapes and materials and learn from them. Finding similar designs in a large dataset of previously manufactured parts is a challenging problem. To solve it, researchers have proposed spatial and spectral shape descriptors, including the D2 distribution, spherical harmonics (SH), and the Fast Fourier Transform (FFT), as well as machine learning methods applied to various representations of 3D part models, such as multi-view images, voxels, triangle meshes, and point clouds. However, there has not been a comprehensive analysis of these shape descriptors, especially for part similarity search aimed at manufacturing process selection. To remedy this gap, this paper presents an in-depth comparative study of these shape descriptors for part similarity search. While we acknowledge the importance of factors like part size, tolerance, and cost in manufacturing process selection, this paper focuses on part shape and material properties only. Our findings show that SH performs best among non-machine-learning methods for manufacturing process selection, yielding 97.96% testing accuracy under the proposed quantitative evaluation metric. Among machine learning methods, deep learning on multi-view image representations performs best when rotational invariance is not a primary concern, yielding 99.85% testing accuracy, while deep learning on point cloud representations performs best when rotational invariance is required, yielding 99.44% testing accuracy.
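Of the descriptors compared above, the D2 distribution is the simplest to reproduce: it is a histogram of distances between randomly sampled surface point pairs. The sketch below is a minimal, hedged illustration, assuming parts are already sampled into point clouds; the sample count, bin count, and L1 comparison are assumptions of this sketch, not the settings used in the study.

```python
# Minimal sketch of the D2 shape distribution: a histogram of distances between
# randomly chosen point pairs. Parameter choices here are illustrative only.
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=64, seed=0):
    """Normalized histogram of pairwise distances for an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d /= d.max() + 1e-12                        # scale-normalize so overall size does not dominate
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-12)

def d2_distance(desc_a, desc_b):
    """Smaller L1 distance between descriptors suggests more similar shapes."""
    return float(np.abs(desc_a - desc_b).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    cube = rng.uniform(-1.0, 1.0, size=(2000, 3))
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    print("cube vs sphere:", d2_distance(d2_descriptor(cube), d2_descriptor(sphere)))
    print("cube vs scaled cube:", d2_distance(d2_descriptor(cube), d2_descriptor(2.0 * cube)))
```

Because the histogram is built from distances only and normalized by the largest sampled distance, this descriptor is insensitive to rotation and uniform scaling, which is part of why distance-based descriptors are attractive for similarity search over previously manufactured parts.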
-
Abstract: Given a part design, the task of manufacturing process classification identifies an appropriate manufacturing process to fabricate it. Our previous research proposed a large dataset for manufacturing process classification and achieved accurate classification results based on a combination of a convolutional neural network (CNN) and the heat kernel signature for triangle meshes. In this paper, we constructed a classification method based on rotation-invariant shape descriptors and a neural network, and it achieved better accuracy than all previous methods. This method uses a point cloud part representation, in contrast to the triangle mesh representation used in our previous work. The first step extracted rotation-invariant features consisting of a set of distances between points in the point cloud. The extracted shape descriptors were then fed into a CNN to classify the manufacturing process. In addition, we provide two visualization methods for interpreting the intermediate layers of the neural network. Finally, the method was tested on several ambiguous examples, and the results were consistent with expectations. In this paper, we considered only shape information; non-shape information such as materials and tolerances was ignored. Additionally, only parts that require a single manufacturing process were considered. Our work demonstrates that part shape attributes alone are adequate for discriminating between the manufacturing processes considered.
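The key property this approach relies on, that distances between points are unchanged when the part is rotated, is easy to verify numerically. The sketch below uses a hypothetical feature layout chosen for illustration (the network input described in the paper differs): it extracts sorted point-to-reference distances and checks that they are identical before and after a random rotation.

```python
# Sketch of the rotation-invariance property behind distance-based descriptors:
# pairwise distances do not change when the point cloud is rotated.
# The feature layout here is hypothetical, not the paper's network input.
import numpy as np

def distance_features(points, k=32, seed=0):
    """For each point, sorted distances to k randomly chosen reference points."""
    rng = np.random.default_rng(seed)
    refs = points[rng.choice(len(points), size=k, replace=False)]
    d = np.linalg.norm(points[:, None, :] - refs[None, :, :], axis=-1)
    return np.sort(d, axis=1)                   # sorting removes dependence on reference order

def random_rotation(seed=0):
    """Random proper rotation: orthogonalize a Gaussian matrix, fix the determinant."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]                      # flip one axis so det(q) == +1
    return q

if __name__ == "__main__":
    pts = np.random.default_rng(7).uniform(-1.0, 1.0, size=(512, 3))
    rot = random_rotation(seed=1)
    f_before = distance_features(pts, seed=2)
    f_after = distance_features(pts @ rot.T, seed=2)  # same reference indices, rotated part
    print("max feature change after rotation:", float(np.abs(f_before - f_after).max()))
```

Because the inputs themselves are rotation-invariant, a classifier trained on such features does not need to learn invariance from data augmentation.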
-
Generative Design by Embedding Topology Optimization into Conditional Generative Adversarial Network
Abstract: Generative design (GD) techniques have been proposed to generate numerous designs at early design stages for ideation and exploration purposes. Previous research on GD using deep neural networks required tedious iterations between the neural network and design optimization, as well as post-processing to generate functional designs. Additionally, design constraints such as volume fraction could not be enforced. In this paper, a two-stage non-iterative formulation is proposed to overcome these limitations. In the first stage, a conditional generative adversarial network (cGAN) is utilized to control design parameters. In the second stage, topology optimization (TO) is embedded into the cGAN (cGAN+TO) to ensure that the desired functionality is achieved. Tests on different combinations of loss terms and different parameter settings within topology optimization demonstrated the diversity of the generated designs. A further study showed that cGAN+TO can be extended to different load and boundary conditions by modifying these parameters in the second stage of training, without retraining the first stage. The results demonstrate that GD can be realized efficiently and robustly by cGAN+TO.
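As a loose sketch of the first-stage idea only, the following PyTorch snippet shows a generator conditioned on a design parameter such as the target volume fraction; the architecture, layer sizes, and variable names are placeholders of this sketch, and the second-stage topology optimization that makes the designs functional (the actual cGAN+TO contribution) is not shown.

```python
# Minimal PyTorch sketch of a conditional generator (first stage only).
# Layer sizes and names are placeholders; the topology-optimization second
# stage of cGAN+TO is intentionally omitted here.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, cond_dim=1, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, grid * grid), nn.Sigmoid(),  # per-cell material density in [0, 1]
        )

    def forward(self, z, cond):
        x = torch.cat([z, cond], dim=1)         # condition the generator by concatenation
        return self.net(x).view(-1, 1, self.grid, self.grid)

if __name__ == "__main__":
    gen = ConditionalGenerator()
    z = torch.randn(4, 64)                      # latent noise
    vol_frac = torch.full((4, 1), 0.4)          # requested volume fraction as the condition
    designs = gen(z, vol_frac)
    print(designs.shape)                        # torch.Size([4, 1, 32, 32])
    print(designs.mean(dim=(1, 2, 3)))          # mean density; training would push this toward the condition
```

In a full implementation, an adversarial loss (together with the additional loss terms the abstract mentions for controlling parameters such as volume fraction) would train this stage, and the embedded topology-optimization stage would then ensure the generated designs achieve the desired functionality.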
